NeMo Guardrails


The Structural Safety Generalization Problem

Broomfield, Julius, Gibbs, Tom, Kosak-Hine, Ethan, Ingebretsen, George, Nasir, Tia, Zhang, Jason, Iranmanesh, Reihaneh, Pieri, Sara, Rabbany, Reihaneh, Pelrine, Kellin

arXiv.org Artificial Intelligence

LLM jailbreaks are a widespread safety challenge. Given that this problem has not yet proven tractable, we suggest targeting a key failure mechanism: the failure of safety to generalize across semantically equivalent inputs. We further narrow the target by requiring that the attacks we study have desirable tractability properties: explainability, transferability between models, and transferability between goals. We perform red-teaming within this framework by uncovering new vulnerabilities to multi-turn, multi-image, and translation-based attacks. By design, these attacks are semantically equivalent to their single-turn, single-image, or untranslated counterparts, enabling systematic comparisons; we show that the different structures yield different safety outcomes. We then demonstrate the potential for this framework to enable new defenses by proposing a Structure Rewriting Guardrail, which converts an input to a structure more conducive to safety assessment. This guardrail significantly improves refusal of harmful inputs without over-refusing benign ones. Thus, by framing this intermediate challenge - more tractable than universal defenses but essential for long-term safety - we highlight a critical milestone for AI safety research.
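A rough sketch may help make the Structure Rewriting Guardrail concrete: before any safety check runs, a multi-turn input is rewritten into a single self-contained request, so the moderation step sees the full intent at once. The Python helper below is a hypothetical illustration of that idea, not the paper's implementation; the rewrite prompt, the llm callable, and the moderate function are all invented for the example.

# Hypothetical sketch of a structure-rewriting guardrail (not the paper's code).
# `llm` is assumed to be any callable mapping a prompt string to a completion string;
# `moderate` is assumed to return True when an input looks safe to answer.

REWRITE_PROMPT = (
    "Rewrite the following multi-turn conversation as a single, self-contained "
    "request that preserves the user's full intent:\n\n{conversation}"
)

def rewrite_structure(llm, turns):
    """Collapse a multi-turn input into one prompt so a safety check sees the whole intent."""
    conversation = "\n".join(f"{t['role']}: {t['content']}" for t in turns)
    return llm(REWRITE_PROMPT.format(conversation=conversation))

def guarded_generate(llm, moderate, turns):
    """Assess safety on the rewritten input, then answer only if it passes."""
    canonical = rewrite_structure(llm, turns)
    if not moderate(canonical):
        return "I can't help with that request."
    return llm(canonical)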


Enhancing Guardrails for Safe and Secure Healthcare AI

Gangavarapu, Ananya

arXiv.org Artificial Intelligence

Generative AI holds immense promise for addressing global healthcare access challenges, with numerous innovative applications now ready for use across various healthcare domains. However, a significant barrier to the widespread adoption of these domain-specific AI solutions is the lack of robust safety mechanisms to manage issues such as hallucination and misinformation and to ensure truthfulness. Left unchecked, these risks can compromise patient safety and erode trust in healthcare AI systems. While general-purpose frameworks like Llama Guard are useful for filtering toxicity and harmful content, they do not fully address the stringent requirements for truthfulness and safety in healthcare contexts. This paper examines the unique safety and security challenges inherent to healthcare AI, particularly the risk of hallucinations, the spread of misinformation, and the need for factual accuracy in clinical settings. I propose enhancements to existing guardrails frameworks, such as NVIDIA NeMo Guardrails, to better suit healthcare-specific needs. By strengthening these safeguards, I aim to ensure the secure, reliable, and accurate use of AI in healthcare, mitigating misinformation risks and improving patient safety.
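As a rough illustration of the kind of enhancement argued for here, NeMo Guardrails can attach output-side checks to a model's answers. The configuration sketch below is hypothetical: the rail names are assumed to correspond to the library's built-in self-check flows (which also require matching prompts in a real setup), the model choice is arbitrary, and none of it is the paper's actual configuration.

# Hypothetical sketch: output-side rails for a healthcare assistant built on NeMo Guardrails.
# Rail names and model are illustrative assumptions, not the paper's proposed configuration.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

rails:
  output:
    flows:
      - self check facts          # assumed built-in fact-checking rail
      - self check hallucination  # assumed built-in hallucination rail
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "What is the recommended adult dose of ibuprofen?"}
])
print(response["content"])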


NeMo Guardrails: A Toolkit for Controllable and Safe LLM Applications with Programmable Rails

Rebedea, Traian, Dinu, Razvan, Sreedhar, Makesh, Parisien, Christopher, Cohen, Jonathan

arXiv.org Artificial Intelligence

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or rails for short) are a specific way of controlling the output of an LLM, such as not talking about topics considered harmful, following a predefined dialogue path, using a particular language style, and more. Several mechanisms allow LLM providers and developers to embed guardrails into a specific model at training time, e.g. via model alignment. In contrast, NeMo Guardrails uses a runtime inspired by dialogue management to let developers add programmable rails to LLM applications - these are user-defined, independent of the underlying LLM, and interpretable. Our initial results show that the proposed approach can be used with several LLM providers to develop controllable and safe LLM applications using programmable rails.
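A minimal usage sketch of the toolkit's Python API may help: a rails configuration (YAML settings plus Colang flow definitions) is loaded from a directory and wrapped around the underlying LLM, and every generation call is then routed through the runtime that applies the rails. The config path and message below are placeholders.

# Minimal sketch of the NeMo Guardrails Python API; the config path and message are placeholders.
from nemoguardrails import LLMRails, RailsConfig

# The config directory holds the YAML settings plus any Colang files defining the rails.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Requests pass through the guardrails runtime before and after the underlying LLM call.
response = rails.generate(messages=[
    {"role": "user", "content": "Can you help me reset my password?"}
])
print(response["content"])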


NVIDIA made an open source tool for creating safer and more secure AI models

Engadget

Since March, NVIDIA has offered AI Foundations, a service that allows businesses to train large language models (LLMs) on their own proprietary data. Today the company is introducing NeMo Guardrails, a tool designed to help developers ensure their generative AI apps are accurate, appropriate and safe. NeMo Guardrails allows software engineers to enforce three different kinds of limits on their in-house LLMs. Specifically, firms can set "topical guardrails" that will prevent their apps from addressing subjects they weren't trained to tackle. For instance, NVIDIA suggests a customer service chatbot would, with the help of its software, decline to answer a question about the weather.
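The weather example maps naturally onto a topical rail. The sketch below shows one way such a rail could be written with Colang flows supplied inline; it is an illustrative assumption, not NVIDIA's published example, and the model settings are placeholders.

# Hypothetical topical guardrail: a customer-service bot that declines weather questions.
# The Colang flows and model settings are illustrative, not NVIDIA's published example.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about weather
  "What's the weather like today?"
  "Will it rain tomorrow?"

define bot decline weather question
  "I'm sorry, I can only help with questions about our products and services."

define flow weather guardrail
  user ask about weather
  bot decline weather question
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

reply = rails.generate(messages=[
    {"role": "user", "content": "Is it going to snow this weekend?"}
])
print(reply["content"])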